nnet3: removing add_lda and include_log_softmax options #1573
vimalmanohar wants to merge 3 commits into kaldi-asr:master from
Conversation
Thanks a lot... I'm afraid I am working on a bunch of other changes that may conflict with this, so I think it's better if we wait and have you commit this in a day or two. I probably won't need the lda-related code, as I think I already resolved that issue everywhere, but the other part is relevant.
OK, I have now committed those other changes I was talking about; you'll probably have conflicts.
@vimalmanohar, can you please try to resolve the conflicts in this PR?
@vimalmanohar, sorry, I dropped the ball on this, and new conflicts have appeared. Would you mind resolving them?
In egs/wsj/s5/steps/nnet3/train_raw_dnn.py:

@@ -286,7 +288,9 @@ def train(args, run_opts):
     # use during decoding
     common_train_lib.copy_egs_properties_to_exp_dir(egs_dir, args.dir)

-    if (args.stage <= -3) and os.path.exists(args.dir+"/configs/init.config"):
+    add_lda = common_train_lib.is_lda_added(config_dir)
+
Are there other reasons we use init.config other than to make the LDA-like transform?
Probably not. It seems like it will be removed in the transfer learning PR.
init.config is still used to create the initial model. The check is needed to know if the LDA needs to be trained.
Did you see my comment:
"I think you are mistaken, if we are talking about the current kaldi_52 code; init.config is only used if we are doing the LDA thing."
Can you please update the PR?
I do not understand what needs to be done. The current solution is needed to know if LDA needs to be trained. The function is_lda_added can be changed to read init.raw if needed.
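(For context, a rough sketch of what such a helper could look like. This is purely hypothetical: the body of is_lda_added is not shown anywhere in this thread, and the final.config filename and the "lda.mat" marker scanned for below are assumptions based on the usual nnet3 xconfig conventions, not the PR's actual code.)

import os

def is_lda_added(config_dir):
    # Hypothetical sketch: treat a reference to an LDA-like fixed-affine
    # matrix in the generated configs as evidence that LDA was added.
    # The real helper might instead inspect init.raw, as suggested above.
    final_config = os.path.join(config_dir, "final.config")
    if not os.path.exists(final_config):
        return False
    with open(final_config) as f:
        return any("lda.mat" in line for line in f)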
In that case can we just keep the current code instead of adding a new function to check it?
I think you are mistaken, if we are talking about the current kaldi_52 code; init.config is only used if we are doing the LDA thing.
Look at the current xconfig_to_configs.py in kaldi_52: it skips creating the init.config if there are no output nodes in init.config, and the only layer that adds an output node there (I believe) is the LDA layer. So I believe the script only needs to check for the existence of the init.config; the add_lda variable is not needed at all.
Closing this as I'm working on a version of this myself, along with certain other code cleanups.
No description provided.